128 research outputs found
Deep learning-based segmentation of malignant pleural mesothelioma tumor on computed tomography scans: application to scans demonstrating pleural effusion
Tumor volume is of interest for the prognostic assessment, treatment-response evaluation, and staging of malignant pleural mesothelioma. Many mesothelioma patients present with, or develop, pleural fluid, which can complicate the segmentation of this disease. Deep convolutional neural networks (CNNs) with the two-dimensional U-Net architecture were trained to segment tumor in the left and right hemithoraces, with the networks initialized from layers pretrained on ImageNet. Networks were trained on a dataset of 5230 axial sections from 154 CT scans of 126 mesothelioma patients. A test set of 94 CT sections from 34 patients, all of whom presented with both tumor and pleural effusion, and a more general test set of 130 CT sections from 43 patients were used to evaluate segmentation performance of the deep CNNs. The Dice similarity coefficient (DSC), average Hausdorff distance, and bias in predicted tumor area were computed against radiologist-provided tumor segmentations on the test sets. The method achieved a median DSC of 0.690 on the tumor-and-effusion test set and significantly outperformed a previous deep learning-based mesothelioma segmentation method on both test sets.
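The evaluation metrics named above are standard and can be sketched directly. A minimal illustration of the Dice similarity coefficient and the tumor-area bias on toy binary masks (function names are illustrative, not from the paper's code):

```python
# Hedged sketch of the DSC and area-bias metrics on flat binary masks.

def dice_coefficient(pred, truth):
    """DSC = 2|A intersect B| / (|A| + |B|) for flat binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

def area_bias(pred, truth):
    """Signed difference in segmented area (predicted minus reference)."""
    return sum(pred) - sum(truth)

pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
print(round(dice_coefficient(pred, truth), 3))  # 0.667
print(area_bias(pred, truth))                   # 0
```

A median DSC of 0.690 on this scale (0 = no overlap, 1 = perfect overlap) reflects the difficulty of delineating pleural tumor next to effusion.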
Enhancing the Prediction of Lung Cancer Survival Rates Using 2D Features from 3D Scans
Author's accepted manuscript. Available from 18/06/2021 (acceptedVersion).
Uncertainty quantification in medical image segmentation with normalizing flows
Medical image segmentation is inherently an ambiguous task due to factors
such as partial volumes and variations in anatomical definitions. While in most
cases the segmentation uncertainty is around the border of structures of
interest, there can also be considerable inter-rater differences. The class of
conditional variational autoencoders (cVAE) offers a principled approach to
inferring distributions over plausible segmentations that are conditioned on
input images. Segmentation uncertainty estimated from samples of such
distributions can be more informative than using pixel level probability
scores. In this work, we propose a novel conditional generative model that is
based on conditional Normalizing Flow (cFlow). The basic idea is to increase
the expressivity of the cVAE by introducing a cFlow transformation step after
the encoder. This yields improved approximations of the latent posterior
distribution, allowing the model to capture richer segmentation variations.
With this, we show that the quality and diversity of samples obtained from our
conditional generative model are enhanced. Performance of our model, which we
call cFlow Net, is evaluated on two medical imaging datasets, demonstrating
substantial improvements in both qualitative and quantitative measures when
compared to a recent cVAE-based model.
Comment: 12 pages. Accepted to be presented at the 11th International Workshop on Machine Learning in Medical Imaging. Source code will be updated at https://github.com/raghavian/cFlo
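The core idea of adding a flow step after the encoder can be sketched on a scalar latent. The planar-flow form below is a standard illustrative choice, not necessarily the paper's exact conditional flow layer; the point is that an invertible transform with a tractable Jacobian turns a Gaussian posterior sample into a sample from a richer distribution:

```python
import math

# Hedged sketch: a sample z0 from the encoder's Gaussian posterior is passed
# through one invertible flow step, yielding a transformed sample plus the
# log |det Jacobian| needed for density evaluation by change of variables.

def planar_flow(z, w=1.0, b=0.0, u=0.5):
    """One planar-flow step on a scalar latent: z' = z + u * tanh(w*z + b)."""
    h = math.tanh(w * z + b)
    z_new = z + u * h
    log_det = math.log(abs(1.0 + u * w * (1.0 - h * h)))
    return z_new, log_det

z0 = 0.2                      # e.g. a draw from the Gaussian posterior
z1, log_det = planar_flow(z0)
print(round(z1, 3))           # 0.299
```

Stacking several such steps (conditioned on the input image, as in cFlow) is what lets the model capture richer segmentation variations than the plain cVAE posterior.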
Novel Block Diagonalization for Reducing Features and Computations in Medical Diagnosis
Author's accepted manuscript. Available from 28/11/2021 (acceptedVersion).
Learning Visual Context by Comparison
Finding diseases from an X-ray image is an important yet highly challenging
task. Current methods for solving this task exploit various characteristics of
the chest X-ray image, but one of the most important characteristics is still
missing: the necessity of comparison between related regions in an image. In
this paper, we present Attend-and-Compare Module (ACM) for capturing the
difference between an object of interest and its corresponding context. We show
that explicit difference modeling can be very helpful in tasks that require
direct comparison between locations from afar. This module can be plugged into
existing deep learning models. For evaluation, we apply our module to three
chest X-ray recognition tasks and COCO object detection & segmentation tasks
and observe consistent improvements across tasks. The code is available at
https://github.com/mk-minchul/attend-and-compare.
Comment: ECCV 2020 spotlight paper
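The comparison idea can be sketched in a few lines. This is a simplified, hedged illustration: the real module learns attention weights over CNN feature maps end to end, whereas here the pooling and scale are fixed and the names are invented for the example:

```python
# Hedged sketch of explicit difference modeling: pool a feature vector from a
# region of interest and from a related context region, then add their
# difference back into the representation.

def mean_pool(vectors):
    """Average a list of equal-length feature vectors channel-wise."""
    n = len(vectors)
    return [sum(v[c] for v in vectors) / n for c in range(len(vectors[0]))]

def attend_and_compare(region_feats, context_feats, feats, scale=1.0):
    """Augment each feature vector with the region-vs-context difference."""
    diff = [scale * (o - c) for o, c in
            zip(mean_pool(region_feats), mean_pool(context_feats))]
    return [[x + d for x, d in zip(v, diff)] for v in feats]

region  = [[1.0, 2.0], [3.0, 4.0]]   # features inside the region of interest
context = [[0.0, 1.0], [2.0, 3.0]]   # features from the compared context
out = attend_and_compare(region, context, [[0.5, 0.5]])
print(out)  # [[1.5, 1.5]]
```

The design point is that the difference is computed explicitly rather than left for convolutions to discover, which matters when the compared regions are far apart in the image.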
CASED: Curriculum Adaptive Sampling for Extreme Data Imbalance
We introduce CASED, a novel curriculum sampling algorithm that facilitates
the optimization of deep learning segmentation or detection models on data sets
with extreme class imbalance. We evaluate the CASED learning framework on the
task of lung nodule detection in chest CT. In contrast to two-stage solutions,
wherein nodule candidates are first proposed by a segmentation model and
refined by a second detection stage, CASED improves the training of deep nodule
segmentation models (e.g. UNet) to the point where state of the art results are
achieved using only a trivial detection stage. CASED improves the optimization
of deep segmentation models by allowing them to first learn how to distinguish
nodules from their immediate surroundings, while continuously adding a greater
proportion of difficult-to-classify global context, until uniformly sampling
from the empirical data distribution. Using CASED during training yields a
minimalist proposal to the lung nodule detection problem that tops the LUNA16
nodule detection benchmark with an average sensitivity score of 88.35%.
Furthermore, we find that models trained using CASED are robust to nodule
annotation quality by showing that comparable results can be achieved when only
a point and radius for each ground truth nodule are provided during training.
Finally, the CASED learning framework makes no assumptions with regard to
imaging modality or segmentation target and should generalize to other medical
imaging problems where class imbalance is a persistent problem.
Comment: 20th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2017
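The curriculum described above (start with nodule-centred patches, anneal toward uniform sampling of the empirical distribution) can be sketched as follows. The linear schedule and all names here are illustrative assumptions; the paper's exact annealing schedule may differ:

```python
import random

# Hedged sketch of a CASED-style curriculum: early in training, patches come
# mostly from the rare positive class (nodule-centred); the fraction drawn
# uniformly from the empirical data distribution grows until sampling is
# fully uniform.

def uniform_fraction(step, total_steps):
    """Fraction of patches drawn uniformly; anneals linearly from 0 to 1."""
    return min(1.0, step / float(total_steps))

def draw_patch(step, total_steps, nodule_patches, all_patches, rng=random):
    if rng.random() < uniform_fraction(step, total_steps):
        return rng.choice(all_patches)     # uniform over the data distribution
    return rng.choice(nodule_patches)      # curriculum: nodule-centred patch

print(uniform_fraction(0, 1000))     # 0.0 -> all nodule-centred at the start
print(uniform_fraction(1000, 1000))  # 1.0 -> fully uniform at the end
```

Annealing to the true data distribution is what keeps the final model calibrated to how rare nodules actually are, rather than overfitting to an artificially balanced sampler.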
Hierarchical Classification of Pulmonary Lesions: A Large-Scale Radio-Pathomics Study
Diagnosis of pulmonary lesions from computed tomography (CT) is important but
challenging for clinical decision making in lung cancer-related diseases. Deep
learning has achieved great success in computer-aided diagnosis (CADx) of lung
cancer, but it suffers from label ambiguity due to the difficulty of
radiological diagnosis. Considering that invasive pathological analysis serves
as the clinical gold standard of lung cancer diagnosis, in this study,
we solve the label ambiguity issue via a large-scale radio-pathomics dataset
containing 5,134 radiological CT images with pathologically confirmed labels,
including cancers (e.g., invasive/non-invasive adenocarcinoma, squamous
carcinoma) and non-cancer diseases (e.g., tuberculosis, hamartoma). This
retrospective dataset, named Pulmonary-RadPath, enables development and
validation of accurate deep learning systems to predict invasive pathological
labels with a non-invasive procedure, i.e., radiological CT scans. A
three-level hierarchical classification system for pulmonary lesions is
developed, which covers most diseases in cancer-related diagnosis. We explore
several techniques for hierarchical classification on this dataset, and propose
a Leaky Dense Hierarchy approach whose effectiveness is demonstrated in our
experiments. Our study significantly exceeds prior art in data scale (6x
larger), disease comprehensiveness, and hierarchy depth. The promising results
suggest the potential to facilitate precision medicine.
Comment: MICCAI 2020 (Early Accepted)
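A plain three-level hierarchy of the kind described can be sketched with path products of conditional probabilities. The numbers below are invented for illustration, and the sketch deliberately omits the "leaky" routing of probability mass that distinguishes the paper's Leaky Dense Hierarchy:

```python
# Hedged sketch of hierarchical classification: a lesion's probability for a
# leaf label is the product of conditional probabilities along its path,
# e.g. cancer -> adenocarcinoma -> invasive.

def path_probability(level_conditionals, path):
    """Multiply conditional probabilities down a hierarchy path."""
    p = 1.0
    for level, label in enumerate(path):
        p *= level_conditionals[level][label]
    return p

conditionals = [
    {"cancer": 0.7, "non-cancer": 0.3},                  # level 1
    {"adenocarcinoma": 0.8, "squamous carcinoma": 0.2},  # level 2, given cancer
    {"invasive": 0.6, "non-invasive": 0.4},              # level 3, given adeno
]
p = path_probability(conditionals, ["cancer", "adenocarcinoma", "invasive"])
print(round(p, 3))  # 0.336
```

Strict path products like this are brittle when an upper level is wrong, which is the motivation for letting probability "leak" across the hierarchy as the paper proposes.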
Deep Learning with Lung Segmentation and Bone Shadow Exclusion Techniques for Chest X-Ray Analysis of Lung Cancer
The recent progress of computing, machine learning, and especially deep
learning for image recognition has had a meaningful effect on the automatic
detection of various diseases from chest X-ray images (CXRs). Here, the
efficiency of lung segmentation and bone shadow exclusion techniques is
demonstrated for the analysis of 2D CXRs by a deep learning approach, helping
radiologists identify suspicious lesions and nodules in lung cancer patients.
Training and validation were performed on the original JSRT dataset (dataset
#01), the BSE-JSRT dataset,
i.e. the same JSRT dataset, but without clavicle and rib shadows (dataset #02),
original JSRT dataset after segmentation (dataset #03), and BSE-JSRT dataset
after segmentation (dataset #04). The results demonstrate the high efficiency
and usefulness of the considered pre-processing techniques, even in the
simplified configuration. The pre-processed dataset without bones (dataset #02)
demonstrates much better accuracy and loss than the other pre-processed
datasets after lung segmentation (datasets #03 and #04).
Comment: 10 pages, 7 figures; The First International Conference on Computer Science, Engineering and Education Applications (ICCSEEA2018) (www.uacnconf.org/iccseea2018) (accepted)
Robust Fusion of Probability Maps
The fusion of probability maps is required when analysing a collection of image labels or probability maps produced by several segmentation algorithms or human raters. The challenge is to properly weight the combination of maps so as to reflect agreement among raters, the presence of outliers, and spatial uncertainty in the consensus. In this paper, we address several shortcomings of prior work in continuous label fusion. We introduce a novel approach to jointly estimate a reliable consensus map while assessing the production of outliers and the confidence in each rater. Our probabilistic model is based on Student's t-distributions, allowing local estimates of raters' performances. The introduction of bias and spatial priors leads to proper rater bias estimates and control over the smoothness of the consensus map. Image intensity information is incorporated by a geodesic distance transform for binary masks. Finally, we propose an approach to cluster raters based on variational boosting, thus possibly producing several alternative consensus maps. Our approach was successfully tested on the MICCAI 2016 MS lesions dataset, on MR prostate delineations, and on deep learning-based segmentation predictions of lung nodules from the LIDC dataset.
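The spirit of robust fusion (raters that disagree with the consensus get less weight) can be sketched as a simple fixed-point iteration. This is a deliberately simplified stand-in: the paper's model uses Student's t-distributions with bias and spatial priors, none of which is reproduced here:

```python
# Hedged sketch of robust probability-map fusion: a per-voxel weighted average
# of rater maps, with each rater re-weighted by the inverse of its mean
# squared residual against the current consensus.

def fuse(maps, weights):
    """Per-voxel weighted average of probability maps."""
    total = sum(weights)
    return [sum(w * m[i] for w, m in zip(weights, maps)) / total
            for i in range(len(maps[0]))]

def update_weights(maps, consensus, eps=1e-6):
    """Weight each rater inversely to its mean squared residual."""
    return [1.0 / (eps + sum((v - c) ** 2 for v, c in zip(m, consensus))
                   / len(consensus))
            for m in maps]

maps = [[0.9, 0.1, 0.8],   # rater 1
        [0.8, 0.2, 0.9],   # rater 2
        [0.1, 0.9, 0.2]]   # outlier rater
weights = [1.0, 1.0, 1.0]
for _ in range(3):                       # a few fixed-point iterations
    consensus = fuse(maps, weights)
    weights = update_weights(maps, consensus)
print(weights[2] < weights[0])  # True: the outlier is down-weighted
```

Using heavy-tailed Student's t likelihoods instead of this squared-error heuristic is what gives the paper's model principled, locally varying robustness to outlier raters.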